
    Proactive Defense Against Physical Denial of Service Attacks using Poisson Signaling Games

    While the Internet of things (IoT) promises to improve areas such as energy efficiency, health care, and transportation, it is highly vulnerable to cyberattacks. In particular, distributed denial-of-service (DDoS) attacks overload the bandwidth of a server. But many IoT devices form part of cyber-physical systems (CPS). Therefore, they can be used to launch "physical" denial-of-service attacks (PDoS) in which IoT devices overflow the "physical bandwidth" of a CPS. In this paper, we quantify the population-based risk to a group of IoT devices targeted by malware for a PDoS attack. In order to model the recruitment of bots, we develop a "Poisson signaling game," a signaling game with an unknown number of receivers, which have varying abilities to detect deception. Then we use a version of this game to analyze two mechanisms (legal and economic) to deter botnet recruitment. Equilibrium results indicate that 1) defenders can bound botnet activity, and 2) legislating a minimum level of security has only a limited effect, while incentivizing active defense can decrease botnet activity arbitrarily. This work provides a quantitative foundation for proactive PDoS defense.
    Comment: 2017 Conference on Decision and Game Theory for Security (GameSec2017). arXiv admin note: text overlap with arXiv:1703.0523
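    The "unknown number of receivers" idea admits a compact numerical sketch: if receivers arrive as a Poisson(lam) population and a type-t receiver detects deception with probability p_t, then by Poisson thinning the undetected receivers are again Poisson with a scaled rate. A minimal Python illustration follows; the rates, type mix, and detection probabilities are illustrative, not taken from the paper.

```python
import math
import random

# Sketch of the "unknown number of receivers" in a Poisson signaling game.
# Receivers arrive as N ~ Poisson(lam); a receiver has type t with probability
# type_probs[t] and detects deception with probability detect_probs[t].
# By Poisson thinning, undetected receivers follow
# Poisson(lam * sum_t type_probs[t] * (1 - detect_probs[t])).

def expected_undetected(lam, type_probs, detect_probs):
    """Closed-form expected number of receivers that miss the deception."""
    return lam * sum(q * (1.0 - p) for q, p in zip(type_probs, detect_probs))

def poisson_sample(rng, lam):
    """Knuth's method; adequate for small lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_undetected(lam, type_probs, detect_probs, trials=20_000, seed=0):
    """Monte Carlo check of the thinning formula."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for _ in range(poisson_sample(rng, lam)):
            u, t, acc = rng.random(), 0, type_probs[0]
            while t + 1 < len(type_probs) and u > acc:  # draw receiver type
                t += 1
                acc += type_probs[t]
            if rng.random() > detect_probs[t]:          # detection fails
                total += 1
    return total / trials
```

    The simulated average matches the thinning formula, which is what lets the equilibrium analysis bound expected botnet recruitment without knowing the exact population size.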

    A Mean-Field Stackelberg Game Approach for Obfuscation Adoption in Empirical Risk Minimization

    Data ecosystems are becoming larger and more complex due to online tracking, wearable computing, and the Internet of Things. But privacy concerns are threatening to erode the potential benefits of these systems. Recently, users have developed obfuscation techniques that issue fake search engine queries, undermine location tracking algorithms, or evade government surveillance. Interestingly, these techniques raise two conflicts: one between each user and the machine learning algorithms which track the users, and one between the users themselves. In this paper, we use game theory to capture the first conflict with a Stackelberg game and the second conflict with a mean field game. We combine both into a dynamic and strategic bi-level framework which quantifies accuracy using empirical risk minimization and privacy using differential privacy. In equilibrium, we identify necessary and sufficient conditions under which 1) each user is incentivized to obfuscate if other users are obfuscating, 2) the tracking algorithm can avoid this by promising a level of privacy protection, and 3) this promise is incentive-compatible for the tracking algorithm.
    Comment: IEEE Global SIP Symposium on Control & Information Theoretic Approaches to Privacy and Security
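    The accuracy-versus-privacy trade-off described here is commonly instantiated with the Laplace mechanism. The sketch below is not the paper's model; the scalar data, sensitivity-1 assumption, and epsilon values are illustrative. It shows the core tension: a smaller epsilon (more privacy) inflates the learner's empirical risk by the noise variance.

```python
import random

# Sketch of user-side obfuscation via the Laplace mechanism. For scalar data
# bounded in [0, 1] (sensitivity 1), adding Laplace noise with scale b = 1/eps
# gives eps-differential privacy and adds variance 2*b**2 to the learner's
# mean-squared error. All numbers here are illustrative.

def laplace_noise(rng, scale):
    """Laplace(0, scale) as the difference of two independent exponentials."""
    rate = 1.0 / scale
    return rng.expovariate(rate) - rng.expovariate(rate)

def obfuscate(data, eps, seed=0):
    """Each user perturbs their datum independently before release."""
    rng = random.Random(seed)
    scale = 1.0 / eps
    return [x + laplace_noise(rng, scale) for x in data]

def excess_risk(eps):
    """Extra per-user mean-squared error caused by obfuscation alone."""
    return 2.0 * (1.0 / eps) ** 2
```

    A tracking algorithm that credibly promises a privacy level can be read as committing to an epsilon, which caps this excess risk and, per the abstract, can remove each user's incentive to obfuscate unilaterally.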

    Quantitative Models of Imperfect Deception in Network Security using Signaling Games with Evidence

    Deception plays a critical role in many interactions in communication and network security. Game-theoretic models called "cheap talk signaling games" capture the dynamic and information asymmetric nature of deceptive interactions. But signaling games inherently model undetectable deception. In this paper, we investigate a model of signaling games in which the receiver can detect deception with some probability. This model nests traditional signaling games and complete information Stackelberg games as special cases. We present the pure strategy perfect Bayesian Nash equilibria of the game. Then we illustrate these analytical results with an application to active network defense. The presence of evidence forces majority-truthful behavior and eliminates some pure strategy equilibria. It always benefits the deceived player, but surprisingly sometimes also benefits the deceiving player.
    Comment: IEEE Communications and Network Security (IEEE CNS) 201

    Deception by Design: Evidence-Based Signaling Games for Network Defense

    Deception plays a critical role in the financial industry, online markets, national defense, and countless other areas. Understanding and harnessing deception - especially in cyberspace - is both crucial and difficult. Recent work in this area has used game theory to study the roles of incentives and rational behavior. Building upon this work, we employ a game-theoretic model for the purpose of mechanism design. Specifically, we study a defensive use of deception: implementation of honeypots for network defense. How does the design problem change when an adversary develops the ability to detect honeypots? We analyze two models: cheap-talk games and an augmented version of those games that we call cheap-talk games with evidence, in which the receiver can detect deception with some probability. Our first contribution is this new model for deceptive interactions. We show that the model includes traditional signaling games and complete information games as special cases. We also demonstrate numerically that deception detection sometimes eliminates pure-strategy equilibria. Finally, we present the surprising result that the utility of a deceptive defender can sometimes increase when an adversary develops the ability to detect deception. These results apply concretely to network defense. They are also general enough for the large and critical body of strategic interactions that involve deception.
    Comment: To be presented at Workshop on the Economics of Information Security (WEIS) 2015, Delft University of Technology, The Netherlands

    Modeling and Analysis of Leaky Deception using Signaling Games with Evidence

    Deception plays critical roles in economics and technology, especially in emerging interactions in cyberspace. Holistic models of deception are needed in order to analyze interactions and to design mechanisms that improve them. Game theory provides such models. In particular, existing work models deception using signaling games. But signaling games inherently model deception that is undetectable. In this paper, we extend signaling games by including a detector that gives off probabilistic warnings when the sender acts deceptively. Then we derive pooling and partially-separating equilibria of the game. We find that 1) high-quality detectors eliminate some pure-strategy equilibria, 2) detectors with high true-positive rates encourage more honest signaling than detectors with low false-positive rates, 3) receivers obtain optimal outcomes for equal-error-rate detectors, and 4) surprisingly, deceptive senders sometimes benefit from highly accurate deception detectors. We illustrate these results with an application to defensive deception for network security. Our results provide a quantitative and rigorous analysis of the fundamental aspects of detectable deception.
    Comment: Submitted Feb 12, 2018 to IEEE Transactions on Information Forensics and Security (IEEE T-IFS)
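    The detector's effect on the receiver can be written as a one-line Bayes update over sender types, which is the mechanism behind findings 1)-3) above. A minimal sketch; the prior and the detector's rates are illustrative, not values from the paper.

```python
# Sketch of the receiver's belief update in a signaling game with evidence.
# The detector fires an alarm with probability tpr when the sender is
# deceptive (true-positive rate) and fpr when the sender is truthful
# (false-positive rate). The receiver updates a prior belief that the
# sender is deceptive via Bayes' rule.

def posterior_deceptive(prior, tpr, fpr, alarm):
    """Posterior P(deceptive | detector output) over {deceptive, truthful}."""
    like_dec = tpr if alarm else 1.0 - tpr
    like_hon = fpr if alarm else 1.0 - fpr
    num = prior * like_dec
    return num / (num + (1.0 - prior) * like_hon)
```

    For a detector with tpr = 0.9 and fpr = 0.1, an alarm moves a 0.5 prior to a 0.9 posterior and silence moves it to 0.1: informative detectors polarize beliefs, while a detector with tpr = fpr leaves the prior untouched.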

    A Game-Theoretic Taxonomy and Survey of Defensive Deception for Cybersecurity and Privacy

    Cyberattacks on both databases and critical infrastructure have threatened public and private sectors. Ubiquitous tracking and wearable computing have infringed upon privacy. Advocates and engineers have recently proposed using defensive deception as a means to leverage the information asymmetry typically enjoyed by attackers as a tool for defenders. The term deception, however, has been employed broadly and with a variety of meanings. In this paper, we survey 24 articles from 2008-2018 that use game theory to model defensive deception for cybersecurity and privacy. Then we propose a taxonomy that defines six types of deception: perturbation, moving target defense, obfuscation, mixing, honey-x, and attacker engagement. These types are delineated by their information structures, agents, actions, and duration: precisely concepts captured by game theory. Our aims are to rigorously define types of defensive deception, to capture a snapshot of the state of the literature, to provide a menu of models which can be used for applied research, and to identify promising areas for future work. Our taxonomy provides a systematic foundation for understanding different types of defensive deception commonly encountered in cybersecurity and privacy.
    Comment: To Appear in ACM Computing Surveys (CSUR)

    Phishing for Phools in the Internet of Things: Modeling One-to-Many Deception using Poisson Signaling Games

    Strategic interactions ranging from politics and pharmaceuticals to e-commerce and social networks support equilibria in which agents with private information manipulate others who are vulnerable to deception. Especially in cyberspace and the Internet of things, deception is difficult to detect and trust is complicated to establish. For this reason, effective policy-making, profitable entrepreneurship, and optimal technological design demand quantitative models of deception. In this paper, we use game theory to model specifically one-to-many deception. We combine a signaling game with a model called a Poisson game. The resulting Poisson signaling game extends traditional signaling games to include 1) exogenous evidence of deception, 2) an unknown number of receivers, and 3) receivers of multiple types. We find closed-form equilibrium solutions for a subset of Poisson signaling games, and characterize the rates of deception that they support. We show that receivers with higher abilities to detect deception can use crowd-defense tactics to mitigate deception for receivers with lower abilities to detect deception. Finally, we discuss how Poisson signaling games could be used to defend against the process by which the Mirai botnet recruits IoT devices in preparation for a distributed denial-of-service attack.
    Comment: This article was not accepted. It was revised and appears here: arXiv:1707.0370

    iSTRICT: An Interdependent Strategic Trust Mechanism for the Cloud-Enabled Internet of Controlled Things

    The cloud-enabled Internet of controlled things (IoCT) envisions a network of sensors, controllers, and actuators connected through a local cloud in order to intelligently control physical devices. Because cloud services are vulnerable to advanced persistent threats (APTs), each device in the IoCT must strategically decide whether to trust cloud services that may be compromised. In this paper, we present iSTRICT, an interdependent strategic trust mechanism for the cloud-enabled IoCT. iSTRICT is composed of three interdependent layers. In the cloud layer, iSTRICT uses FlipIt games to conceptualize APTs. In the communication layer, it captures the interaction between devices and the cloud using signaling games. In the physical layer, iSTRICT uses optimal control to quantify the utilities in the higher level games. Best response dynamics link the three layers in an overall "game-of-games," for which the outcome is captured by a concept called Gestalt Nash equilibrium (GNE). We prove the existence of a GNE under a set of natural assumptions and develop an adaptive algorithm to iteratively compute the equilibrium. Finally, we apply iSTRICT to trust management for autonomous vehicles that rely on measurements from remote sources. We show that strategic trust in the communication layer achieves a worst-case probability of compromise for any attack and defense costs in the cyber layer.
    Comment: To appear in IEEE Transactions on Information Forensics and Security
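    The FlipIt interaction in the cloud layer can be illustrated with a short Monte Carlo sketch. Here both players "flip" ownership of a cloud service at exponential rates; the memoryless strategies, rates, and horizon are illustrative assumptions for intuition, not iSTRICT's actual parameters.

```python
import random

# Monte Carlo sketch of a FlipIt game like the one iSTRICT uses to model
# APTs on cloud services: each player takes over the resource at the jump
# times of its own Poisson process, and we measure the fraction of time the
# attacker holds it. With memoryless strategies, the long-run attacker share
# is atk_rate / (def_rate + atk_rate). All numbers are illustrative.

def flipit_attacker_share(def_rate, atk_rate, horizon=20_000.0, seed=0):
    rng = random.Random(seed)
    t, atk_time, attacker_owns = 0.0, 0.0, False
    next_def = rng.expovariate(def_rate)
    next_atk = rng.expovariate(atk_rate)
    while t < horizon:
        t_next = min(next_def, next_atk, horizon)
        if attacker_owns:
            atk_time += t_next - t
        t = t_next
        if t >= horizon:
            break
        if next_def <= next_atk:     # defender reclaims the service
            attacker_owns = False
            next_def = t + rng.expovariate(def_rate)
        else:                        # attacker compromises the service
            attacker_owns = True
            next_atk = t + rng.expovariate(atk_rate)
    return atk_time / horizon
```

    With a defender flipping twice as fast as the attacker, the attacker holds the service about a third of the time; this control fraction is the quantity the signaling layer treats as the probability that a cloud message comes from a compromised source.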

    Game-Theoretic Analysis of Cyber Deception: Evidence-Based Strategies and Dynamic Risk Mitigation

    Deception is a technique to mislead human or computer systems by manipulating beliefs and information. For applications of cyber deception, non-cooperative games are a natural choice of model to capture the adversarial interactions between the players and to quantitatively characterize the conflicting incentives and strategic responses. In this chapter, we provide an overview of deception games in three different environments and extend the baseline signaling game models to include evidence through side-channel knowledge acquisition to capture the information asymmetry, dynamics, and strategic behaviors of deception. We analyze deception in a binary information space based on a signaling game framework with a detector that gives off probabilistic evidence of the deception when the sender acts deceptively. We then focus on a class of continuous one-dimensional information spaces and take into account the cost of deception in the signaling game. We finally explore the multi-stage incomplete-information Bayesian game model for defensive deception against advanced persistent threats (APTs). We use the perfect Bayesian Nash equilibrium (PBNE) as the solution concept for the deception games and analyze the strategic equilibrium behaviors of both the deceivers and the deceivees.
    Comment: arXiv admin note: text overlap with arXiv:1810.0075

    Proactive Population-Risk Based Defense Against Denial of Cyber-Physical Service Attacks

    While the Internet of things (IoT) promises to improve areas such as energy efficiency, health care, and transportation, it is highly vulnerable to cyberattacks. In particular, DDoS attacks work by overflowing the bandwidth of a server. But many IoT devices form part of cyber-physical systems (CPS). Therefore, they can be used to launch a "physical" denial-of-service attack (PDoS) in which IoT devices overflow the "physical bandwidth" of a CPS. In this paper, we quantify the population-based risk to a group of IoT devices targeted by malware for a PDoS attack. To model the recruitment of bots, we extend a traditional game-theoretic concept and create a "Poisson signaling game." Then we analyze two different mechanisms (legal and economic) to deter botnet recruitment. We find that 1) defenders can bound botnet activity and 2) legislating a minimum level of security has only a limited effect, while incentivizing active defense can decrease botnet activity arbitrarily. This work provides a quantitative foundation for designing proactive defense against PDoS attacks.
    Comment: This article was not accepted. It has been revised and appears here: arXiv:1707.0370